

Search for: All records

Creators/Authors contains: "Park, Sung Min"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (the administrative interval before open access).
What is a DOI Number?

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Machine unlearning---efficiently removing the effect of a small "forget set" of training data on a pre-trained machine learning model---has recently attracted significant research interest. Despite this interest, however, recent work shows that existing machine unlearning techniques do not hold up to thorough evaluation in non-convex settings. In this work, we introduce a new machine unlearning technique that exhibits strong empirical performance even in such challenging settings. Our starting point is the perspective that the goal of unlearning is to produce a model whose outputs are statistically indistinguishable from those of a model re-trained on all but the forget set. This perspective naturally suggests a reduction from the unlearning problem to that of *data attribution*, where the goal is to predict the effect of changing the training set on a model's outputs. Thus motivated, we propose the following meta-algorithm, which we call Datamodel Matching (DMM): given a trained model, we (a) use data attribution to predict the output of the model if it were re-trained on all but the forget set points; then (b) fine-tune the pre-trained model to match these predicted outputs. In a simple convex setting, we show how this approach provably outperforms a variety of iterative unlearning algorithms. Empirically, we use a combination of existing evaluations and a new metric based on the KL-divergence to show that even in non-convex settings, DMM achieves strong unlearning performance relative to existing algorithms. An added benefit of DMM is that it is a meta-algorithm, in the sense that future advances in data attribution translate directly into better unlearning algorithms, pointing to a clear direction for future progress in unlearning.
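The two-step DMM recipe in the abstract can be illustrated in a toy linear-regression setting. This is only a minimal sketch, not the authors' implementation: in a linear model the counterfactual outputs of a model retrained without the forget set can be computed exactly, so that exact computation stands in for step (a)'s data attribution; step (b) then fine-tunes the pretrained parameters by gradient descent to match those predicted outputs. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=n)

def fit(X, y):
    # Ridge least squares (small regularizer for numerical stability).
    return np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)

theta_pretrained = fit(X, y)

forget = np.arange(10)                       # indices of the "forget set"
keep = np.setdiff1d(np.arange(n), forget)

# Step (a): predict the outputs of a model retrained on all but the forget
# set. Here the linear setting lets us compute them exactly; a real DMM run
# would obtain them from a predictive datamodel instead of retraining.
theta_retrain = fit(X[keep], y[keep])
X_eval = rng.normal(size=(50, d))            # points at which to match outputs
predicted_outputs = X_eval @ theta_retrain

# Step (b): fine-tune the pretrained parameters to match the predicted
# outputs, via gradient descent on the squared output-matching loss.
theta = theta_pretrained.copy()
lr = 0.01
for _ in range(2000):
    grad = X_eval.T @ (X_eval @ theta - predicted_outputs) / len(X_eval)
    theta -= lr * grad

# After matching, outputs on X_eval should be close to the exact retrain.
residual = np.max(np.abs(X_eval @ theta - predicted_outputs))
```

Because the matching loss is a well-conditioned quadratic here, gradient descent drives the residual essentially to zero; in non-convex settings the fine-tuning step is the same in spirit but has no such guarantee.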
  2.
    Abstract: Dielectrics have long been considered as unsuitable for pure electrical switches; under weak electric fields, they show extremely low conductivity, whereas under strong fields, they suffer from irreversible damage. Here, we show that flexoelectricity enables damage-free exposure of dielectrics to strong electric fields, leading to reversible switching between electrical states—insulating and conducting. Applying strain gradients with an atomic force microscope tip polarizes an ultrathin film of an archetypal dielectric SrTiO₃ via flexoelectricity, which in turn generates non-destructive, strong electrostatic fields. When the applied strain gradient exceeds a certain value, SrTiO₃ suddenly becomes highly conductive, yielding at least around a 10⁸-fold decrease in room-temperature resistivity. We explain this phenomenon, which we call the colossal flexoresistance, based on the abrupt increase in the tunneling conductance of ultrathin SrTiO₃ under strain gradients. Our work extends the scope of electrical control in solids, and inspires further exploration of dielectric responses to strong electromechanical fields.
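As background for the mechanism the abstract invokes: the flexoelectric coupling between a strain gradient and polarization is conventionally written as the standard textbook relation (not taken from the paper itself; index conventions vary across the literature):

```latex
P_l = \mu_{ijkl} \, \frac{\partial \varepsilon_{ij}}{\partial x_k}
```

where \(P_l\) is the induced polarization, \(\mu_{ijkl}\) the flexoelectric tensor, and \(\partial \varepsilon_{ij} / \partial x_k\) the gradient of the strain field. This is why a strain gradient applied by an AFM tip, rather than uniform strain, can polarize the SrTiO₃ film and generate the strong internal electrostatic fields described above.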